Automated English essay scoring method based on multi-level semantic features
ZHOU Xianbing, FAN Xiaochao, REN Ge, YANG Yong
Journal of Computer Applications    2021, 41 (8): 2205-2211.   DOI: 10.11772/j.issn.1001-9081.2020101572
Automated Essay Scoring (AES) automatically analyzes and scores essays, and has become one of the most active research topics in the application of natural language processing to education. Existing AES methods treat deep and shallow semantic features separately and ignore the impact of multi-level semantic fusion on essay scoring; to address this, a neural network model based on Multi-Level Semantic Features (MLSF) was proposed for AES. Firstly, a Convolutional Neural Network (CNN) was used to capture local semantic features and a hybrid neural network was used to capture global semantic features, so that the semantic features of the essay were obtained at a deep level. Secondly, topic-level features were obtained from the document-level topic vector of the essay. At the same time, since grammatical errors and linguistic richness are difficult for deep learning models to mine, a small number of handcrafted features were constructed to obtain the linguistic features of the essay at a shallow level. Finally, the essay was scored automatically after feature fusion. Experimental results show that the proposed model improves performance significantly on all subsets of the public dataset of the Kaggle Automated Student Assessment Prize (ASAP) competition, with an average Quadratic Weighted Kappa (QWK) of 79.17%, validating the effectiveness of the model on AES tasks.
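For reference, the Quadratic Weighted Kappa reported above can be computed as follows. This is a generic sketch of the standard metric in Python/NumPy, not the authors' code; the score range passed in is an assumed parameter.

import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic Weighted Kappa between two integer score vectors."""
    rater_a = np.asarray(rater_a, dtype=int)
    rater_b = np.asarray(rater_b, dtype=int)
    n = max_rating - min_rating + 1
    # Observed agreement matrix
    O = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating, b - min_rating] += 1
    # Expected matrix from the two marginal histograms
    hist_a = O.sum(axis=1)
    hist_b = O.sum(axis=0)
    E = np.outer(hist_a, hist_b) / O.sum()
    # Quadratic disagreement weights
    idx = np.arange(n)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Example: perfect agreement gives QWK = 1.0
print(quadratic_weighted_kappa([1, 2, 3, 4], [1, 2, 3, 4], 1, 4))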
Cleaning scheduling model with constraints and its solution
FAN Xiaomao, XIONG Honglin, ZHAO Gansen
Journal of Computer Applications    2021, 41 (2): 577-582.   DOI: 10.11772/j.issn.1001-9081.2020050735
Cleaning tasks in a cleaning service company typically have different levels, durations and cycles, and there is no general model of the cleaning scheduling problem. At present, cleaning schedules are produced mainly by manual scheduling, which is time-consuming, labor-intensive and of unstable quality. Therefore, a mathematical model of the constrained cleaning scheduling problem, which is NP-hard, was proposed, and the Simulated Annealing (SA), Bee Colony Optimization (BCO), Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) algorithms were used to solve it. Finally, an empirical analysis was carried out on the real scheduling data of a cleaning service company. Experimental results show that, compared with the manual scheduling scheme, the heuristic intelligent optimization algorithms have obvious advantages in solving the constrained cleaning scheduling problem, and the manpower demand of the resulting schedules is reduced significantly: over a one-year scheduling cycle, these algorithms save between 218.62 and 513.30 hours of cleaning manpower compared with the manual scheme. The mathematical model combined with heuristic intelligent optimization algorithms is therefore feasible and efficient for the constrained cleaning scheduling problem, and provides decision support for the scientific management of cleaning service companies.
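To illustrate how a heuristic such as SA can be applied to a constrained scheduling instance, the sketch below minimizes peak manpower over a toy set of tasks, with time-window constraints handled as penalty terms. The task data, cost function and neighborhood move are hypothetical stand-ins, not the paper's model.

import math, random

def simulated_annealing(initial, cost, neighbor, t0=100.0, cooling=0.995, steps=20000):
    # Generic SA loop; constraints are handled as penalty terms inside cost().
    current, best = initial, initial
    temperature = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            if cost(current) < cost(best):
                best = current
        temperature *= cooling
    return best

# Toy instance (hypothetical data): five tasks, each (duration, (earliest start, latest finish)).
tasks = [(2, (0, 8)), (1, (0, 4)), (3, (2, 10)), (2, (4, 12)), (1, (6, 12))]

def cost(starts):
    load = [0] * 13
    penalty = 0
    for (dur, (lo, hi)), s in zip(tasks, starts):
        if s < lo or s + dur > hi:          # time-window constraint violated
            penalty += 100
        for slot in range(s, s + dur):
            load[slot] += 1
    return max(load) + penalty              # minimise peak manpower plus penalties

def neighbor(starts):
    s = list(starts)
    i = random.randrange(len(s))
    s[i] = max(0, min(10, s[i] + random.choice((-1, 1))))
    return tuple(s)

print(simulated_annealing(tuple(lo for _, (lo, _) in tasks), cost, neighbor))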
Network faulty link identification based on linear programming relaxation method
FAN Xiaobo, LI Xingming
Journal of Computer Applications    2018, 38 (7): 2005-2008.   DOI: 10.11772/j.issn.1001-9081.2018010155
Concerning the NP-hard (Nondeterministic Polynomial-hard) problem of localizing link failures from end-to-end measurements in a communication network, a new tomography method that relaxes the Boolean constraints was proposed. Firstly, the relationship between path states and link states in a network was modeled as Boolean algebraic equations, and failure localization was treated as an optimization problem under the constraints of these equations. The NP-hardness of this optimization problem comes from the binary (normal/failed) link states. Therefore, by relaxing the Boolean constraints, the proposed method transforms the problem into a Linear Programming (LP) problem, which can be solved by any LP solver to obtain the set of failed links. Simulation experiments were conducted to identify link failures in real network topologies. The experimental results show that the false negative rate of the proposed method is 5%-30% lower than that of the classical heuristic algorithm TOMO (TOMOgraphy).
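A minimal sketch of the relaxation step, assuming a known routing matrix and using scipy.optimize.linprog; the topology and measurements below are illustrative, not from the paper's experiments.

import numpy as np
from scipy.optimize import linprog

# Routing matrix R (hypothetical, 4 paths x 5 links): R[p, l] = 1 if path p traverses link l.
R = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
path_failed = np.array([1, 1, 0, 0])   # end-to-end measurements (1 = path failed)

# Relax x_l in {0, 1} to 0 <= x_l <= 1 and minimise the number of failed links:
# a failed path must contain at least one failed link; a good path contains none.
n_links = R.shape[1]
A_ub, b_ub = [], []
bounds = [(0, 1)] * n_links
for p, failed in enumerate(path_failed):
    if failed:
        A_ub.append(-R[p])      # sum of x over the path >= 1
        b_ub.append(-1)
    else:
        for l in np.nonzero(R[p])[0]:
            bounds[l] = (0, 0)  # links on healthy paths cannot have failed

res = linprog(c=np.ones(n_links), A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print(np.round(res.x))          # links with value close to 1 are declared failed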
Dense noise face recognition based on sparse representation and algorithm optimization
CAI Ti-jian, FAN Xiao-ping, LIU Jun-xiong
Journal of Computer Applications    2012, 32 (08): 2313-2319.   DOI: 10.3724/SP.J.1087.2012.02313
To improve the speed and noise robustness of face recognition based on sparse representation, the Cross-And-Bouquet (CAB) model and Compressed Sensing (CS) reconstruction algorithms were studied. To avoid the large matrix inversion in the reconstruction algorithm, a Fast Orthogonal Matching Pursuit (FOMP) algorithm was proposed, which converts the computationally expensive matrix inversion into lightweight vector-matrix operations. To increase the amount of effective information in densely corrupted images, several practical and efficient methods were put forward. The experimental results verify that these methods effectively improve the face recognition rate under dense noise, with the identifiable noise ratio reaching up to 75%, and that they are of practical value.
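For orientation, the sketch below shows standard Orthogonal Matching Pursuit on a toy compressed-sensing instance. The paper's FOMP variant replaces the least-squares refit (and its implicit matrix inversion) with lighter vector-matrix operations, which is not reproduced here; the random dictionary and sparse signal are hypothetical.

import numpy as np

def omp(A, y, k):
    """Classic Orthogonal Matching Pursuit: recover a k-sparse x with y ~ A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit the coefficients on the chosen support (least squares).
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy check: a 3-sparse signal recovered from 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, 3)
print(np.linalg.norm(x_hat - x_true))   # close to 0 when the support is recovered exactly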
Message generation and storage model based on the HL7 V2.x protocol
FAN Xiao, HUANG Qing-song
Journal of Computer Applications    2011, 31 (12): 3418-3421.  
Health information systems in China cannot exchange data according to unified data standards, which impedes information sharing. Based on the HL7 V2.x (Health Level Seven Version 2.x) health information exchange standard and the theory of HL7 V2.x message parsing, a method was described in detail in which data extracted from database fields is used to generate HL7 V2.x messages and parsed message information is stored in a specific database. An optimized model for message generation and storage was then proposed. In this model, mapping files are designed to avoid redundant manual configuration, and a hash table is used as the data structure during message generation to make it more efficient. Simulation results show the feasibility of the model.
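A minimal sketch of the mapping-table idea: database columns are mapped to HL7 V2.x segment fields through a hash table and assembled into a message string. The field map, segments and message type below are hypothetical examples, not the mapping files of the proposed model.

from datetime import datetime

# Hypothetical mapping from database columns to HL7 V2.x fields (segment, field index);
# in the model described above, a mapping file would drive this table.
FIELD_MAP = {
    "patient_id":   ("PID", 3),
    "patient_name": ("PID", 5),
    "birth_date":   ("PID", 7),
    "sex":          ("PID", 8),
}

def build_adt_message(row, sending_app="HIS", receiving_app="LIS"):
    """Assemble a minimal ADT^A01 message from one database row (sketch only)."""
    now = datetime.now().strftime("%Y%m%d%H%M%S")
    msh = ["MSH", "^~\\&", sending_app, "", receiving_app, "", now, "", "ADT^A01", "1", "P", "2.4"]
    pid = ["PID"] + [""] * 8
    for column, (segment, index) in FIELD_MAP.items():
        if segment == "PID":
            pid[index] = str(row.get(column, ""))
    return "\r".join("|".join(seg) for seg in (msh, pid))

row = {"patient_id": "000123", "patient_name": "ZHANG^SAN", "birth_date": "19800101", "sex": "M"}
# Segments are terminated by carriage returns; replaced with newlines here for display.
print(build_adt_message(row).replace("\r", "\n"))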
Improved SVM co-training based intrusion detection
WU Shu-yue, YU Jie, FAN Xiao-ping
Journal of Computer Applications    2011, 31 (12): 3337-3339.  
A Support Vector Machine (SVM) co-training method with variation factors was proposed to detect network intrusions. The method made full use of the large amount of unlabeled data and increased detection accuracy and stability by co-training two classifiers. Variation factors were further introduced between iterations to reduce the risk of performance degradation caused by over-learning. The simulation results show that the proposed method is 7.72% more accurate than the traditional SVM method and depends less on the training and test datasets.
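A sketch of plain two-view SVM co-training with scikit-learn, for context only: each round, the most confidently predicted unlabeled samples (by either view) are pseudo-labeled and added to the training set. The feature-view split is assumed to be given, and the variation factors introduced by the paper between iterations are omitted here.

import numpy as np
from sklearn.svm import SVC

def co_train(X1, X2, y, X1_u, X2_u, rounds=10, add_per_round=20):
    """Simplified two-view co-training with SVMs (sketch, not the paper's algorithm)."""
    c1, c2 = SVC(probability=True), SVC(probability=True)
    X1, X2, y = X1.copy(), X2.copy(), y.copy()
    for _ in range(rounds):
        if len(X1_u) == 0:
            break
        c1.fit(X1, y)
        c2.fit(X2, y)
        # Pseudo-label the unlabeled samples the classifiers are most confident about.
        p1, p2 = c1.predict_proba(X1_u), c2.predict_proba(X2_u)
        conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
        pick = np.argsort(conf)[-add_per_round:]
        pseudo = np.where(p1.max(axis=1) >= p2.max(axis=1),
                          c1.classes_[p1.argmax(axis=1)],
                          c2.classes_[p2.argmax(axis=1)])[pick]
        X1 = np.vstack([X1, X1_u[pick]])
        X2 = np.vstack([X2, X2_u[pick]])
        y = np.concatenate([y, pseudo])
        keep = np.setdiff1d(np.arange(len(X1_u)), pick)
        X1_u, X2_u = X1_u[keep], X2_u[keep]
    return c1, c2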
Research on FAQ based on HowNet semantic similarity
JIA Ke-liang, FAN Xiao-zhong, ZHANG Yu
Journal of Computer Applications   
Frequently Asked Questions (FAQ) retrieval is the main way to provide online help. A candidate question set was built from the user's query by a search engine, and HowNet-based semantic similarities were computed between the user's query and the candidate questions. The answer of the question most similar to the query was returned to the user. Experiments show that the method improves the matching between questions and answers.
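A minimal sketch of the matching step, assuming a HowNet-backed word similarity function is available; the word_sim stand-in (exact match) and the toy FAQ entries below are hypothetical.

def sentence_similarity(query_words, candidate_words, word_sim):
    """Align each query word with its most similar candidate word and average;
    word_sim(w1, w2) is assumed to return a HowNet-based similarity in [0, 1]."""
    if not query_words or not candidate_words:
        return 0.0
    scores = [max(word_sim(q, c) for c in candidate_words) for q in query_words]
    return sum(scores) / len(scores)

def best_answer(query_words, faq, word_sim):
    """faq: list of (question_words, answer) pairs; return the answer of the
    candidate question most similar to the query."""
    return max(faq, key=lambda qa: sentence_similarity(query_words, qa[0], word_sim))[1]

# Toy run with a stand-in similarity; a real system would plug in a HowNet sememe-based measure.
word_sim = lambda a, b: 1.0 if a == b else 0.0
faq = [(["how", "reset", "password"], "Use the reset link."),
       (["how", "change", "email"], "Open account settings.")]
print(best_answer(["reset", "my", "password"], faq, word_sim))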
Study of IP supporting network traffic of iPAS system
SU Guang-wen, GAO De-yuan, FAN Xiao-ya, YAN Han
Journal of Computer Applications    2005, 25 (05): 1182-1184.   DOI: 10.3724/SP.J.1087.2005.1182
The traffic of the IP supporting network of a large-scale commercial iPAS system was analyzed with the variance-time plot. The result shows that the traffic fits a lightly self-similar process, meaning that the traffic is bursty, but not strongly so. Analysis indicated that the traffic distribution of the IP supporting network is complicated and cannot be expressed by the typical distributions usually used; the peak value of the traffic reflects its burstiness. The study also shows that, for the same traffic process, different sampling time scales yield different peak values, so the sampling time scale must be chosen correctly in order to capture the necessary detail of the traffic.
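The variance-time analysis mentioned above estimates the Hurst exponent from the variance of aggregated traffic. The following is a generic sketch of that estimator, not the original analysis code; the aggregation scales and the white-noise check are arbitrary choices.

import numpy as np

def variance_time_hurst(series, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance (variance-time plot) estimate of the Hurst exponent H.
    For a self-similar process, Var(X^(m)) ~ m^(2H-2), so the slope of
    log Var versus log m gives 2H - 2."""
    x = np.asarray(series, dtype=float)
    log_m, log_var = [], []
    for m in scales:
        n_blocks = len(x) // m
        if n_blocks < 2:
            break
        blocks = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    slope = np.polyfit(log_m, log_var, 1)[0]
    return 1.0 + slope / 2.0

# White noise has H close to 0.5; bursty, strongly self-similar traffic approaches 1.
rng = np.random.default_rng(1)
print(round(variance_time_hurst(rng.standard_normal(100_000)), 2))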
Implementation of automated function test for kernel network protocol stack
LIU Yuan, WANG Kai-yun, FAN Xiao-lan, JIANG Jian-guo
Journal of Computer Applications    2005, 25 (05): 1052-1054.   DOI: 10.3724/SP.J.1087.2005.1052
Based on a study of the manual testing process, combined with UML and the Expect tool, an automated testing model for function testing of the network protocol stack in the operating system kernel was proposed. On Linux, an automated test case using only one computer was given as an example to verify the feasibility of the model and the corresponding techniques. The model solves the problems of automated network configuration and data-driven testing, and at the same time reduces the hardware resources required.
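A minimal sketch of one Expect-style automated test step on a single Linux host, using the Python pexpect package as a stand-in for Expect; the command and pass criterion are hypothetical and far simpler than the kernel protocol-stack cases discussed above.

import pexpect

def test_loopback_ping():
    # Drive a shell command and match its output, Expect-style (hypothetical test case).
    child = pexpect.spawn("ping -c 3 127.0.0.1", timeout=10)
    index = child.expect(["3 received", "0 received", pexpect.TIMEOUT])
    child.close()
    return index == 0   # True when the kernel stack answered all probes

print("PASS" if test_loopback_ping() else "FAIL")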
Application of text clustering in automatic abstracting
GUO Qing-lin, FAN Xiao-zhong, LIU Chang-an
Journal of Computer Applications    2005, 25 (05): 1036-1038.   DOI: 10.3724/SP.J.1087.2005.1036
A method of automatic abstracting based on text clustering was proposed to overcome the shortcomings of current automatic abstracting methods; the use of text clustering makes multi-document abstracting possible. For a specific plastics domain, an automatic abstracting system named TCAAS based on text clustering was implemented, with precision and recall above 80% for single documents and above 75% for multi-document abstracting. Experiments prove that the method is feasible for developing an automatic abstracting system and is worth further study.
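A generic sketch of the clustering step: sentences are clustered by TF-IDF similarity and the sentence closest to each centroid is kept as part of the abstract. This uses scikit-learn as a stand-in and is not the TCAAS implementation; the example sentences are invented.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

def cluster_summarize(sentences, n_clusters=3):
    """Cluster sentences and keep the one closest to each centroid (sketch only)."""
    X = TfidfVectorizer().fit_transform(sentences)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
    # Preserve the original sentence order in the abstract.
    return [sentences[i] for i in sorted(set(closest))]

sentences = ["Plastics output grew fast.", "Exports of plastics rose.",
             "New resins were developed.", "Resin research expanded.",
             "Prices stayed stable.", "Market prices were steady."]
print(cluster_summarize(sentences, n_clusters=3))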
LIU Xiang-rui, ZHU Jian-yong, FAN Xiao-zhong
Journal of Computer Applications    2005, 25 (02): 430-433.   DOI: 10.3724/SP.J.1087.2005.0430
The design and simulation of resource mapping algorithms in a grid environment are difficult, mainly because of resource heterogeneity, geographic distribution, autonomy and the QoS requirements of tasks. This paper introduces auctions into resource mapping and presents a resource mapping algorithm based on multi-item auction in the grid. The algorithm satisfies the requirements of the grid environment and solves the resource mapping problem for a set of independent tasks. Several simulation toolkits for resource mapping were analyzed, a simulation environment was established based on the GridSim toolkit, and the simulation experiments indicate the good performance of the algorithm.
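A toy sketch of a sealed-bid multi-item auction for task-to-resource mapping; the bid function and the speed/length figures are hypothetical, and the actual algorithm and its GridSim simulation are more elaborate.

def multi_item_auction(tasks, resources, value):
    """One-round multi-item auction (sketch): every task bids on every resource
    with value(task, resource); resources are awarded greedily to the highest
    remaining bid, each task winning at most one resource."""
    bids = sorted(((value(t, r), t, r) for t in tasks for r in resources), reverse=True)
    assigned, used_tasks, used_resources = {}, set(), set()
    for v, t, r in bids:
        if t not in used_tasks and r not in used_resources:
            assigned[t] = r
            used_tasks.add(t)
            used_resources.add(r)
    return assigned

# Toy instance (hypothetical resource speeds and task lengths).
resources = {"R1": 500, "R2": 1200}           # resource speed
tasks = {"T1": 3000, "T2": 9000, "T3": 1500}  # task length
value = lambda t, r: resources[r] / tasks[t]  # bid: higher for a faster finish
print(multi_item_auction(tasks, resources, value))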
Analysis and validation of communication protocol based on Petri Net
WANG Jing, FAN Xiao-ya, CAO Qing
Journal of Computer Applications    2005, 25 (01): 165-167.   DOI: 10.3724/SP.J.1087.2005.0165

Based on a PDA project, a communication protocol that communicates over the PSTN was introduced. A series of representative link-layer protocol models was then described abstractly with Petri nets. Guided by a Petri net simulation tool, these models were modified and improved step by step until a protocol model applicable to practical work was obtained.
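For illustration, a Petri net can be stepped through in a few lines of code: a marking maps places to token counts, and a transition fires when its input places hold enough tokens. The send/ack net below is a hypothetical toy, not one of the paper's link-layer models.

def enabled(marking, transition):
    # A transition is enabled when every input place holds enough tokens.
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    # Consume input tokens and produce output tokens.
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

send = {"in": {"ready": 1}, "out": {"waiting_ack": 1}}
ack  = {"in": {"waiting_ack": 1}, "out": {"ready": 1}}
marking = {"ready": 1}
for t in (send, ack, send):
    assert enabled(marking, t)
    marking = fire(marking, t)
print(marking)   # ends with a token in waiting_ack: the sender awaits an acknowledgement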

Automatic extraction of lexicalized tree adjoining grammar
XU Yun, FAN Xiao-zhong, ZHANG Feng
Journal of Computer Applications    2005, 25 (01): 4-6.   DOI: 10.3724/SP.J.1087.2005.00004
An algorithm for extracting a Lexicalized Tree Adjoining Grammar (LTAG) from the Penn Chinese Treebank was presented. The idea of the algorithm is to induce three kinds of trees from the lexicalized treebank; the method of Head-driven Phrase Structure Grammar (HPSG) was then applied to extract lexicalized trees from the corpus, and finally invalid lexicalized trees were filtered out by linguistic rules. Compared with a hand-crafted grammar, the approach requires less human effort and can remedy the omission of syntactic structures in hand-crafted grammars.
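A crude sketch of one ingredient of such extraction: reading a bracketed treebank parse and collecting, for each lexical anchor, the spine of labels from the anchor up to the root (using NLTK). The full algorithm, with HPSG-style head selection and rule-based filtering, is not reproduced here, and the tiny example tree is invented.

from nltk import Tree

def spines(parse):
    # For each lexical anchor, collect the chain of labels from its preterminal
    # up to the root (a crude stand-in for elementary-tree spines).
    result = []
    for index, word in enumerate(parse.leaves()):
        path = parse.leaf_treeposition(index)
        labels = [parse[path[:depth]].label() for depth in range(len(path))]
        result.append((word, list(reversed(labels))))
    return result

tree = Tree.fromstring("(IP (NP (NN 市场)) (VP (VV 繁荣)))")
for word, spine in spines(tree):
    print(word, spine)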